
    Myoelectric Control for Active Prostheses via Deep Neural Networks and Domain Adaptation

    Recent advances in Biological Signal Processing (BSP) and Machine Learning (ML), in particular Deep Neural Networks (DNNs), have paved the way for the development of advanced Human-Machine Interface (HMI) systems for decoding human intent and controlling artificial limbs. Myoelectric control, as a subcategory of HMI systems, deals with detecting, extracting, processing, and ultimately learning from Electromyogram (EMG) signals to command external devices such as hand prostheses. In this context, hand gesture recognition/classification via Surface Electromyography (sEMG) signals has attracted a great deal of interest from many researchers. Despite extensive progress in the field of myoelectric prostheses, however, limitations remain that must be addressed to achieve a more intuitive upper-limb prosthesis. In this Ph.D. thesis, we first review recent research on pattern classification approaches for myoelectric prosthesis control to identify challenges and potential opportunities for improvement. We then aim to enhance the accuracy of myoelectric systems, enabling an accurate and efficient HMI for myocontrol of neurorobotic systems. Besides improving accuracy, reducing the number of parameters in DNNs plays an important role in a Hand Gesture Recognition (HGR) system; in particular, a key factor in achieving a more intuitive upper-limb prosthesis is the feasibility of embedding DNN-based models into prosthesis controllers. Transformers, meanwhile, are powerful DNN models that have revolutionized the Natural Language Processing (NLP) field and have shown great potential to dramatically improve various computer vision tasks. We therefore propose a Transformer-based neural network architecture to classify and recognize upper-limb hand gestures.
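    At the core of any Transformer-based architecture such as the one described above is scaled dot-product self-attention. The following NumPy sketch illustrates the operation on a windowed sEMG feature sequence; the shapes (50 time steps, 8 features) are hypothetical placeholders, not the thesis's actual configuration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # (T, T) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # (T, d) context vectors

# Hypothetical example: a window of T=50 time steps, d=8 sEMG features per step
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
out = scaled_dot_product_attention(X, X, X)            # self-attention: Q = K = V = X
print(out.shape)  # (50, 8)
```

    Because every output step attends to every input step, the model can capture long-range temporal dependencies in the EMG window, which is the property that motivates Transformers over purely convolutional classifiers.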
Finally, another goal of this thesis is to design a modern DNN-based gesture detection model that relies on minimal training data while providing high accuracy. Although DNNs have shown superior accuracy over conventional methods when large amounts of training data are available, their performance degrades substantially when data are limited. Collecting large datasets for training may be feasible in research laboratories, but it is not practical for real-life applications. We propose to address this problem by designing a framework that combines temporal convolutions and attention mechanisms.
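The combination of temporal convolutions and attention can be sketched minimally as follows: a causal 1-D convolution extracts local temporal features per channel, and an attention step pools the time axis into a single feature vector. This is a simplified illustration with hypothetical dimensions and a fixed kernel, not the thesis's actual framework:

```python
import numpy as np

def temporal_conv(x, kernel):
    """Causal 1-D convolution along the time axis, applied per channel."""
    T, C = x.shape
    k = len(kernel)
    padded = np.vstack([np.zeros((k - 1, C)), x])  # left-pad so output at t sees only the past
    out = np.zeros_like(x)
    for t in range(T):
        out[t] = kernel @ padded[t:t + k]          # weighted sum over the last k steps
    return out

def attention_pool(h):
    """Collapse the time axis with softmax attention weights."""
    scores = h.mean(axis=1)                        # (T,) crude relevance score per time step
    w = np.exp(scores - scores.max())
    w /= w.sum()                                   # softmax over time
    return w @ h                                   # (C,) pooled feature vector

rng = np.random.default_rng(0)
emg = rng.standard_normal((100, 4))                # hypothetical: 100 time steps, 4 sEMG channels
h = temporal_conv(emg, np.array([0.5, 0.3, 0.2])) # fixed kernel; learned in a real model
feat = attention_pool(h)
print(feat.shape)  # (4,)
```

The design intuition is that convolutions need few parameters to model short-range structure, while attention reweights time steps by relevance; together they can remain accurate with limited training data.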